LONDON: Facebook parent company Meta Platforms on Tuesday released an AI model capable of translating and transcribing speech in dozens of languages, a potential building-block for tools enabling real-time communication across language divides.
The company said in a blog post that its SeamlessM4T model could support translations between text and speech in nearly 100 languages, as well as full speech-to-speech translation for 35 languages, including Modern Standard Arabic, Western Persian and Urdu.
Meta built the unified multilingual model on its earlier work, combining technology that was previously available only in separate systems, such as No Language Left Behind (NLLB) and the Universal Speech Translator.
CEO Mark Zuckerberg has said he envisions such tools facilitating interactions between users from around the globe in the metaverse, the set of interconnected virtual worlds on which he is betting the company’s future.
Meta is making the model available to the public for non-commercial use, the blog post said.
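For readers who want to experiment with the release, the sketch below shows roughly what text-to-text and text-to-speech translation could look like through the Hugging Face transformers integration of SeamlessM4T. The checkpoint name, language codes and API calls shown are assumptions based on that third-party integration, not details drawn from the article or Meta's announcement.

```python
# Minimal sketch of SeamlessM4T translation via the Hugging Face `transformers`
# integration (an assumption -- the article does not specify a distribution
# channel; the checkpoint name and exact API may differ).
import scipy.io.wavfile
from transformers import AutoProcessor, SeamlessM4TModel

MODEL_ID = "facebook/hf-seamless-m4t-medium"  # assumed checkpoint name
processor = AutoProcessor.from_pretrained(MODEL_ID)
model = SeamlessM4TModel.from_pretrained(MODEL_ID)

# Text-to-text translation: English -> Modern Standard Arabic ("arb").
text_inputs = processor(text="Hello, how are you?", src_lang="eng", return_tensors="pt")
output_tokens = model.generate(**text_inputs, tgt_lang="arb", generate_speech=False)
print(processor.decode(output_tokens[0].tolist()[0], skip_special_tokens=True))

# Text-to-speech translation: the same English text rendered as Urdu speech,
# saved as a 16 kHz WAV file.
audio = model.generate(**text_inputs, tgt_lang="urd")[0].cpu().numpy().squeeze()
scipy.io.wavfile.write("translated.wav", rate=model.config.sampling_rate, data=audio)
```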
The world’s biggest social media company has released a flurry of mostly free AI models this year, including a large language model called Llama that poses a serious challenge to proprietary models sold by Microsoft-backed OpenAI and Alphabet’s Google.
Zuckerberg says an open AI ecosystem works to Meta’s advantage, as the company has more to gain by effectively crowd-sourcing the creation of consumer-facing tools for its social platforms than by charging for access to the models.
Nonetheless, Meta faces the same legal questions as the rest of the industry over the training data ingested to create its models.
In July, comedian Sarah Silverman and two other authors filed copyright infringement lawsuits against both Meta and OpenAI, accusing the companies of using their books as training data without permission.
For the SeamlessM4T model, Meta researchers said in a research paper that they gathered audio training data from 4 million hours of “raw audio originating from a publicly available repository of crawled web data,” without specifying which repository.
A Meta spokesperson did not respond to questions on the provenance of the audio data. Text data came from datasets created last year that pulled content from Wikipedia and associated websites, the research paper said.
Meta said it has conducted research on mitigating toxicity and bias in its generative AI models, and that this work has made the model more aware of and responsive to potential issues.
Earlier this year, Meta joined forces with Alphabet, Microsoft, and OpenAI to announce a joint framework on the responsible use of AI in order to mitigate the risks associated with generative AI tools.
With Reuters